The security of artificial intelligence (AI) is an important research area towards safe, reliable, and trustworthy AI systems. To accelerate the research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, China Industrial Control Systems Cyber Emergency Response Team, Institute for Artificial Intelligence, Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks, including Deepfake Security Competition, Autonomous Driving Security Competition, and Face Recognition Security Competition. This report will introduce the competition rules of these three tracks and the solutions of top-ranking teams in each track.
Early exiting is an effective paradigm for improving the inference efficiency of deep networks. By constructing classifiers (exits) with varying resource demands, such networks allow easy samples to be output at early exits, removing the need to execute deeper layers. While existing works mainly focus on the architectural design of multi-exit networks, the training strategies for such models are largely left unexplored. Current state-of-the-art models treat all samples equally during training; the early-exiting behavior at test time is ignored, leading to a gap between training and testing. In this paper, we propose to bridge this gap through sample weighting. Intuitively, easy samples, which generally exit early in the network during inference, should contribute more to training the early classifiers, whereas the late classifiers should emphasize training on hard samples (those that mostly exit from deeper layers). Our work proposes a weight prediction network to weight the loss of each training sample at each exit. This weight prediction network and the backbone model are jointly optimized under a meta-learning framework with a novel optimization objective. By bringing the adaptive inference-time behavior into the training phase, we show that the proposed weighting mechanism consistently improves the trade-off between classification accuracy and inference efficiency. Code is available at https://github.com/leaplabthu/l2w-den.
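As a rough illustration of the described weighting mechanism, the following PyTorch sketch weights each sample's per-exit loss with a small weight prediction network. The module names and sizes, and the omission of the meta-learning outer loop, are assumptions made for brevity; this is not the released L2W-DEN code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of per-exit sample weighting for a multi-exit classifier.
# Names (WeightPredNet, MultiExitNet) are illustrative, not from the paper's code.

class WeightPredNet(nn.Module):
    """Predicts a per-sample weight for each exit from the sample's logits."""
    def __init__(self, num_classes, num_exits):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_classes, 64), nn.ReLU(),
            nn.Linear(64, num_exits),
        )

    def forward(self, first_exit_logits):
        # Softmax over exits so the weights for one sample sum to 1.
        return torch.softmax(self.mlp(first_exit_logits.detach()), dim=-1)

class MultiExitNet(nn.Module):
    """Toy backbone with intermediate classifiers (exits)."""
    def __init__(self, in_dim=32, num_classes=10, num_exits=3):
        super().__init__()
        self.blocks = nn.ModuleList([nn.Linear(in_dim, in_dim) for _ in range(num_exits)])
        self.exits = nn.ModuleList([nn.Linear(in_dim, num_classes) for _ in range(num_exits)])

    def forward(self, x):
        logits = []
        for block, exit_head in zip(self.blocks, self.exits):
            x = torch.relu(block(x))
            logits.append(exit_head(x))
        return logits  # one logits tensor per exit

def weighted_multi_exit_loss(exit_logits, targets, weight_net):
    # Per-sample weights conditioned on the earliest exit's prediction.
    weights = weight_net(exit_logits[0])          # (batch, num_exits)
    loss = 0.0
    for k, logits in enumerate(exit_logits):
        per_sample = F.cross_entropy(logits, targets, reduction="none")
        loss = loss + (weights[:, k] * per_sample).mean()
    return loss

# Usage: the paper additionally meta-optimizes the weight network on a held-out
# batch, which is omitted here for brevity.
net, weight_net = MultiExitNet(), WeightPredNet(num_classes=10, num_exits=3)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss = weighted_multi_exit_loss(net(x), y, weight_net)
loss.backward()
```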
Spoken language understanding (SLU) treats automatic speech recognition (ASR) and natural language understanding (NLU) as a unified task and usually suffers from data scarcity. We exploit an ASR and NLU joint training method based on meta auxiliary learning to improve the performance of low-resource SLU tasks by utilizing only abundant speech data. One obvious advantage of this method is that it provides a flexible framework for implementing low-resource SLU training tasks without requiring access to any further semantic annotations. In particular, the NLU model is taken as a label generation network to predict intent and slot tags from text; a multi-task network trains the ASR task and the SLU task synchronously from speech; and the predictions of the label generation network are passed to the multi-task network as semantic targets. The efficiency of the proposed algorithm is demonstrated by experiments on the public CATSLU dataset, which produce more suitable ASR hypotheses for the downstream NLU task.
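A minimal sketch of the described label-generation idea, assuming simple GRU encoders and an intent-only semantic target; the names, shapes, and the KL-based distillation loss below are illustrative choices, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# An NLU "label generation network" produces intent pseudo-labels from paired
# transcripts; a multi-task speech model is trained on ASR targets and on those
# generated semantic targets. Slot tagging is omitted to keep the sketch short.

class LabelGenerationNet(nn.Module):
    """NLU model: predicts an intent distribution from transcript tokens."""
    def __init__(self, vocab=1000, hidden=128, num_intents=20):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.intent_head = nn.Linear(hidden, num_intents)

    def forward(self, token_ids):
        h, _ = self.rnn(self.embed(token_ids))
        return self.intent_head(h[:, -1])              # (batch, num_intents)

class MultiTaskSpeechNet(nn.Module):
    """Shared speech encoder with an ASR head and an SLU (intent) head."""
    def __init__(self, feat_dim=80, hidden=128, vocab=1000, num_intents=20):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.asr_head = nn.Linear(hidden, vocab)        # per-frame token logits
        self.intent_head = nn.Linear(hidden, num_intents)

    def forward(self, speech_feats):
        h, _ = self.rnn(speech_feats)
        return self.asr_head(h), self.intent_head(h[:, -1])

nlu = LabelGenerationNet()
speech_model = MultiTaskSpeechNet()
feats = torch.randn(4, 200, 80)                         # toy filterbank features
transcripts = torch.randint(0, 1000, (4, 12))           # paired transcripts

with torch.no_grad():                                    # NLU provides semantic targets
    intent_target = F.softmax(nlu(transcripts), dim=-1)

asr_logits, intent_logits = speech_model(feats)
# SLU loss: match the speech model's intent prediction to the NLU pseudo-label;
# an ASR loss (e.g., CTC against the transcript) would be added here as well.
slu_loss = F.kl_div(F.log_softmax(intent_logits, dim=-1), intent_target,
                    reduction="batchmean")
slu_loss.backward()
```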
Unsupervised domain adaptation is critical in various computer vision tasks, such as object detection and instance segmentation. It seeks to reduce the performance degradation induced by domain bias while also speeding up model deployment. Previous works on domain-adaptive object detection attempt to align image-level and instance-level shifts to minimize the domain discrepancy. However, they may mix single-class features with mixed-class features during image-level domain adaptation, because each image in an object detection task may contain more than one class and object. In order to obtain single-class and mixed-class alignment through single-class alignment alone, we treat mixed-class features as a new class and propose a mixed-class H-divergence for object detection to achieve homogeneous feature alignment and reduce negative transfer. A Semantic Consistency Feature Alignment Model (SCFAM) based on the mixed-class H-divergence is then presented. To improve single-class and mixed-class semantic information and accomplish semantic separation, the SCFAM model introduces a Semantic Prediction Model (SPM) and a Semantic Bridging Component (SBC). The weight of the pixel-level domain discriminator loss is then adjusted according to the SPM results to reduce sample imbalance. Extensive unsupervised domain adaptation experiments on widely used datasets demonstrate the strong object detection performance of our proposed approach under domain-bias settings.
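The re-weighted pixel-level alignment can be sketched roughly as below. The module names (PixelDomainDiscriminator, SemanticPredictionModel) and the specific choice of down-weighting mixed-class pixels are assumptions for illustration only, not the SCFAM implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A pixel-level domain discriminator whose loss is re-weighted by a semantic
# prediction map, so that single-class and mixed-class regions are treated
# separately during alignment.

class PixelDomainDiscriminator(nn.Module):
    def __init__(self, in_ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),                 # per-pixel source/target logit
        )

    def forward(self, feat):
        return self.net(feat)

class SemanticPredictionModel(nn.Module):
    """Predicts, per pixel, whether a feature is single-class or mixed-class."""
    def __init__(self, in_ch=256):
        super().__init__()
        self.net = nn.Conv2d(in_ch, 2, 1)        # 0: single-class, 1: mixed-class

    def forward(self, feat):
        return torch.softmax(self.net(feat), dim=1)

def weighted_domain_loss(feat, domain_label, disc, spm):
    logits = disc(feat)                                     # (B, 1, H, W)
    target = torch.full_like(logits, float(domain_label))   # 0 = source, 1 = target
    per_pixel = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    # Down-weight mixed-class pixels (an assumed choice) to reduce negative transfer.
    single_prob = spm(feat)[:, :1]
    return (per_pixel * single_prob).mean()

disc, spm = PixelDomainDiscriminator(), SemanticPredictionModel()
src_feat, tgt_feat = torch.randn(2, 256, 32, 32), torch.randn(2, 256, 32, 32)
loss = weighted_domain_loss(src_feat, 0, disc, spm) + \
       weighted_domain_loss(tgt_feat, 1, disc, spm)
loss.backward()
```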
Remembering and forgetting mechanisms are two sides of the same coin in the human learning-memory system. Inspired by human brain memory mechanisms, modern machine learning systems have been working to endow machines with lifelong learning capability through better remembering, while treating forgetting as an enemy to be overcome. Nevertheless, this idea may capture only half of the picture. More recently, a growing number of researchers have argued that the brain is born to forget, i.e., forgetting is a natural and active process that serves abstract, rich, and flexible representations. This paper presents a learning model with an active forgetting mechanism for artificial neural networks. The active forgetting mechanism (AFM) is introduced into a network through a "plug-and-play" forgetting layer (P&PF), consisting of inhibitory neurons with an Internal Regulation Strategy (IRS) that adjusts their own extinction rate through lateral inhibition and an External Regulation Strategy (ERS) that regulates the extinction rate of excitatory neurons through inhibition. Experimental studies show that the P&PF offers surprising benefits: adaptive structure, strong generalization, long-term learning and memory, and robustness to data and parameter perturbations. This work sheds light on the importance of forgetting in the learning process and provides new perspectives for understanding the underlying mechanisms of neural networks.
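A minimal sketch of what such a plug-and-play forgetting layer could look like, assuming a learned, input-dependent extinction gate; the precise IRS/ERS dynamics of the paper are simplified here and all names are illustrative.

```python
import torch
import torch.nn as nn

# A small population of inhibitory units gates (partially extinguishes) the
# excitatory activations of the layer it wraps. Lateral inhibition among the
# inhibitory units stands in for the IRS; the sigmoid gate on excitatory
# activations stands in for the ERS.

class ForgettingLayer(nn.Module):
    def __init__(self, dim, num_inhibitory=16):
        super().__init__()
        self.inhibitory = nn.Linear(dim, num_inhibitory)    # inhibitory population
        self.lateral = nn.Linear(num_inhibitory, num_inhibitory, bias=False)
        self.to_gate = nn.Linear(num_inhibitory, dim)

    def forward(self, excitatory):
        inh = torch.relu(self.inhibitory(excitatory))
        inh = torch.relu(inh - self.lateral(inh))           # lateral inhibition (IRS-like)
        gate = torch.sigmoid(self.to_gate(inh))             # extinction rate (ERS-like)
        return excitatory * (1.0 - gate)                    # actively forget a fraction

# Plugging the layer between two ordinary fully connected layers:
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    ForgettingLayer(256),
    nn.Linear(256, 10),
)
out = model(torch.randn(32, 784))
```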
Mobile network traffic forecasting is one of the key functions in daily network operation. Commercial mobile networks are large, heterogeneous, complex, and dynamic. These intrinsic features put mobile network traffic forecasting beyond the reach of even recent advanced algorithms, such as graph convolutional network-based prediction approaches and various attention mechanisms, which have proven successful in vehicle traffic forecasting. In this paper, we cast the problem as a spatial-temporal sequence prediction task. We propose a novel deep learning network architecture, the Adaptive Multi-receptive Field Spatial-Temporal Graph Convolutional Network (AMF-STGCN), to model the traffic dynamics of mobile base stations. AMF-STGCN extends GCN by (1) jointly modeling the complex spatial-temporal dependencies in mobile networks, (2) applying attention mechanisms to capture the various receptive fields of heterogeneous base stations, and (3) introducing an extra decoder based on a fully connected deep network to conquer the error propagation challenge in multi-step forecasting. Experiments on four real-world datasets from two different domains consistently show that AMF-STGCN outperforms state-of-the-art methods.
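The multi-receptive-field idea can be sketched as follows, assuming receptive fields built from powers of a normalized adjacency matrix and a one-shot fully connected decoder; this is an illustrative simplification, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Graph convolutions over several receptive fields (powers of the adjacency
# matrix), fused per node by an attention weight, followed by a fully connected
# decoder that emits all forecasting horizons at once to limit step-by-step
# error propagation.

class MultiReceptiveFieldGCN(nn.Module):
    def __init__(self, in_dim, hidden, max_hops=3):
        super().__init__()
        self.hops = max_hops
        self.proj = nn.ModuleList([nn.Linear(in_dim, hidden) for _ in range(max_hops)])
        self.att = nn.Linear(hidden, 1)

    def forward(self, x, adj):
        # x: (batch, nodes, in_dim); adj: (nodes, nodes), row-normalized
        outs, a = [], adj
        for k in range(self.hops):
            outs.append(torch.relu(self.proj[k](a @ x)))    # (k+1)-hop receptive field
            a = a @ adj
        stack = torch.stack(outs, dim=2)                     # (B, N, hops, hidden)
        weights = torch.softmax(self.att(stack), dim=2)      # per-node hop attention
        return (weights * stack).sum(dim=2)                  # (B, N, hidden)

class AMFSTGCNLikeForecaster(nn.Module):
    def __init__(self, in_dim=12, hidden=64, horizon=6):
        super().__init__()
        self.gcn = MultiReceptiveFieldGCN(in_dim, hidden)
        self.decoder = nn.Linear(hidden, horizon)            # all steps in one shot

    def forward(self, history, adj):
        return self.decoder(self.gcn(history, adj))          # (B, N, horizon)

num_nodes = 20
adj = torch.softmax(torch.randn(num_nodes, num_nodes), dim=-1)  # toy normalized graph
model = AMFSTGCNLikeForecaster()
pred = model(torch.randn(4, num_nodes, 12), adj)                 # 12-step history in
```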
We study the problem of learning from positive and unlabeled (PU) data in the federated setting, where each client only labels a small part of its dataset due to limited resources and time. Unlike the setting of traditional PU learning, in which the negative class consists of a single class, the negative samples that cannot be identified by a client in the federated setting may come from multiple classes that are unknown to the client. Existing PU learning methods can therefore hardly be applied in this situation. To address this problem, we propose a novel framework, Federated learning with Positive and Unlabeled data (FedPU), to minimize the expected risk of multiple negative classes by leveraging the labeled data of other clients. We theoretically analyze the generalization bound of the proposed FedPU. Empirical experiments show that FedPU achieves better performance than conventional supervised and semi-supervised federated learning methods.
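As a hedged illustration, the sketch below combines a binary nnPU-style risk on each client with plain FedAvg aggregation; the actual FedPU objective handles multiple negative classes and is analyzed theoretically in the paper, and the class prior `pi` and all names here are assumptions.

```python
import copy
import torch
import torch.nn as nn

# Each client computes a positive-unlabeled risk on its own data; the server
# averages the client models. This is a simplified stand-in for FedPU.

def pu_risk(model, x_pos, x_unl, pi=0.3):
    """Non-negative PU risk with sigmoid loss; pi is the assumed positive prior."""
    def sig_loss(logits, y):                    # y in {+1, -1}
        return torch.sigmoid(-y * logits).mean()
    pos_logits, unl_logits = model(x_pos).squeeze(-1), model(x_unl).squeeze(-1)
    risk_pos = pi * sig_loss(pos_logits, +1.0)
    risk_neg = sig_loss(unl_logits, -1.0) - pi * sig_loss(pos_logits, -1.0)
    return risk_pos + torch.clamp(risk_neg, min=0.0)   # non-negative correction

def federated_round(global_model, client_batches, lr=0.1):
    client_states = []
    for x_pos, x_unl in client_batches:                 # one local step per client
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        loss = pu_risk(local, x_pos, x_unl)
        opt.zero_grad(); loss.backward(); opt.step()
        client_states.append(local.state_dict())
    # FedAvg: average the clients' parameters into the global model.
    avg = {k: torch.stack([s[k] for s in client_states]).mean(0) for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model

model = nn.Linear(16, 1)
clients = [(torch.randn(8, 16), torch.randn(32, 16)) for _ in range(3)]
model = federated_round(model, clients)
```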
As the computational power of modern hardware has increased dramatically, pre-trained deep learning models (e.g., BERT, GPT-3) learned on large-scale datasets have shown their effectiveness over conventional methods. This progress is mainly contributed by the representation ability of the transformer and its variant architectures. In this paper, we study low-level computer vision tasks (e.g., denoising, super-resolution, and deraining) and develop a new pre-trained model, namely, the image processing transformer (IPT). To maximally excavate the capability of the transformer, we present to utilize the well-known ImageNet benchmark to generate a large amount of corrupted image pairs. The IPT model is trained on these images with multi-heads and multi-tails. In addition, contrastive learning is introduced to adapt well to different image processing tasks. The pre-trained model can therefore be efficiently employed on a desired task after fine-tuning. With only one pre-trained model, IPT outperforms current state-of-the-art methods on various low-level benchmarks. Code is available at https://github.com/huawei-noah/Pretrained-IPT and https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/ipt.
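The multi-head / shared-body / multi-tail layout can be sketched as follows; the patch size, dimensions, and module names are assumptions, and the contrastive loss and the ImageNet corruption pipeline are omitted.

```python
import torch
import torch.nn as nn

# A much-reduced sketch: task-specific heads and tails share one transformer
# body, so different image processing tasks (here "denoise" and "derain")
# reuse the same pre-trained backbone.

class IPTLikeModel(nn.Module):
    def __init__(self, tasks=("denoise", "derain"), dim=96, patch=8):
        super().__init__()
        self.heads = nn.ModuleDict({t: nn.Conv2d(3, dim, 3, padding=1) for t in tasks})
        self.to_tokens = nn.Conv2d(dim, dim, patch, stride=patch)       # patch embed
        body_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.body = nn.TransformerEncoder(body_layer, num_layers=2)     # shared body
        self.from_tokens = nn.ConvTranspose2d(dim, dim, patch, stride=patch)
        self.tails = nn.ModuleDict({t: nn.Conv2d(dim, 3, 3, padding=1) for t in tasks})

    def forward(self, x, task):
        f = self.heads[task](x)                     # task-specific head
        tok = self.to_tokens(f)                     # (B, dim, H/p, W/p)
        b, c, h, w = tok.shape
        tok = tok.flatten(2).transpose(1, 2)        # (B, h*w, dim) token sequence
        tok = self.body(tok)                        # shared transformer body
        f = self.from_tokens(tok.transpose(1, 2).reshape(b, c, h, w))
        return self.tails[task](f)                  # task-specific tail

model = IPTLikeModel()
restored = model(torch.randn(2, 3, 64, 64), task="denoise")   # (2, 3, 64, 64)
```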
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
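A hedged sketch of the two reference steps described above, with invented names and shapes: mask-pooled class centers re-weight the query feature map, and query object queries are calibrated against support object queries via cross-attention. This is an illustration of the idea, not the RefT code.

```python
import torch
import torch.nn as nn

class ReferenceTwiceLike(nn.Module):
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.center_proj = nn.Linear(dim, dim)
        self.cross_att = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def masked_class_center(self, support_feat, support_mask):
        # support_feat: (K, C, H, W); support_mask: (K, 1, H, W) in {0, 1}
        pooled = (support_feat * support_mask).sum((2, 3)) / support_mask.sum((2, 3)).clamp(min=1)
        return self.center_proj(pooled.mean(0, keepdim=True))           # (1, C)

    def forward(self, query_feat, query_obj_queries, support_feat, support_mask,
                support_obj_queries):
        # Reference 1 (feature level): channel re-weighting by the class center.
        center = self.masked_class_center(support_feat, support_mask)   # (1, C)
        gate = torch.sigmoid(center)[..., None, None]                   # (1, C, 1, 1)
        query_feat = query_feat * gate
        # Reference 2 (instance level): calibrate query object queries against
        # support object queries via cross-attention.
        calibrated, _ = self.cross_att(query_obj_queries, support_obj_queries,
                                       support_obj_queries)
        return query_feat, query_obj_queries + calibrated

model = ReferenceTwiceLike()
q_feat = torch.randn(1, 256, 32, 32)             # query image feature map
q_obj = torch.randn(1, 100, 256)                 # query object queries
s_feat = torch.randn(5, 256, 32, 32)             # 5-shot support features
s_mask = (torch.rand(5, 1, 32, 32) > 0.5).float()
s_obj = torch.randn(1, 5, 256)                   # support object queries (one per shot)
new_feat, new_queries = model(q_feat, q_obj, s_feat, s_mask, s_obj)
```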
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
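A simplified sketch of how a style code might modulate a feed-forward layer inside the decoder, assuming per-channel sigmoid scaling; the name StyleAwareFeedForward and the exact modulation form are assumptions, not the paper's style-aware adaptive transformer.

```python
import torch
import torch.nn as nn

# A style code (extracted elsewhere from a reference speaking video) predicts
# per-channel scales that modulate the hidden units of a transformer
# feed-forward block, so decoding adapts to the reference speaking style.

class StyleAwareFeedForward(nn.Module):
    def __init__(self, dim=256, hidden=1024, style_dim=128):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)
        self.style_to_scale = nn.Linear(style_dim, hidden)   # style modulates hidden units

    def forward(self, x, style_code):
        # x: (B, T, dim) decoder features; style_code: (B, style_dim)
        scale = torch.sigmoid(self.style_to_scale(style_code)).unsqueeze(1)  # (B, 1, hidden)
        h = torch.relu(self.fc1(x)) * scale        # style-dependent re-weighting
        return x + self.fc2(h)                      # residual, as in a transformer block

ff = StyleAwareFeedForward()
content = torch.randn(2, 50, 256)                   # audio-driven content features
style = torch.randn(2, 128)                         # style code from reference video
animated = ff(content, style)                       # (2, 50, 256) stylized features
```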